This is something that we have on any system upon which rotation operators act. The general logic is that if you have rotation operators acting on your quantum mechanical system, then by restricting to infinitesimal rotations you can extract their infinitesimal generators, which are the vector of angular momentum operators, J. From there, by finding the simultaneous eigenstates of J² and Jz, we get the standard angular momentum basis, which we denote by |γ, j, m⟩. The γ here is an extra index, needed in case J² and Jz by themselves don't form a complete set, in which case it's necessary to introduce an extra index to resolve the degeneracies. In fact, γ can oftentimes be the eigenvalues of an extra set of observables, which we add to J² and Jz in order to get a complete set of commuting observables. In any case, once we've got this standard angular momentum basis (I should mention, by the way, that it's an orthonormal basis), it's easy to see what the action of the angular momentum operators is on it. J² and Jz are particularly simple, because by construction the basis is an eigenbasis of those operators: J² just brings out its eigenvalue, which is j(j + 1), and Jz brings out its eigenvalue, which is m. By the way, today I'll set ħ = 1; this is just to save writing, really, for simplicity. Likewise, if you apply J±, what you get is the famous square root, which I assume you've seen before, J± |γ, j, m⟩ = √((j ∓ m)(j ± m + 1)) |γ, j, m ± 1⟩, multiplying the basis vector in which the m-value has been raised or lowered. These are the raising and lowering operators.
And then, finally, one thing to notice is that Jx and Jy, the remaining two components of the angular momentum, are linear combinations of J+ and J−, so if you know those matrix elements, that is, if you know the action of J+ and J−, then it's easy to obtain the action of Jx and Jy, and therefore you get all three components of J acting on the standard angular momentum basis. All right, so that's the basic setup. Maybe a further remark is just to remind you that the magnetic quantum number m, being the quantum number of the operator Jz, is something that depends on the orientation of the system. So if you change the orientation, by applying a rotation operator for example, you should expect that the different m-values will get mixed up amongst themselves: you get linear combinations of different m-values. We'll see in a moment that that's exactly what happens. Whereas j labels the eigenvalue of the operator J², which is rotationally invariant, and so it doesn't change under a rotation. This is a point I'll reiterate in a few moments. Anyway, this is the structure of the actions of the angular momentum operators on the standard angular momentum basis. One of the things you can notice right away from these formulas is that the angular momentum operators don't change the values of γ and j: the γ and j on one side are the same as on the other side. In fact, some of them don't even change the value of m (for J² and Jz the m is the same on both sides), but the raising and lowering operators certainly do change the value of m. So we can say, in general, that the angular momentum operators leave the γ and j indices alone but change the m-values. Again, this is related to the fact that they change the orientation, but not the values of any rotational invariants.
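Since everything below rests on these ladder formulas, here is a small numerical sketch of them (the function name and the m-ordering, from +j down to −j, are my own choices; ħ = 1 as in lecture):

```python
import numpy as np

def angular_momentum_matrices(j):
    """Return (Jz, Jplus, Jminus) in the |j, m> basis with hbar = 1.

    Rows and columns are ordered m = +j, j-1, ..., -j (a convention chosen here).
    """
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)                # m-values along the diagonal
    Jz = np.diag(m)
    Jplus = np.zeros((dim, dim))
    # J+|j, m> = sqrt((j - m)(j + m + 1)) |j, m+1>: in this ordering that
    # puts the nonzero entries one step above the diagonal.
    for k in range(1, dim):
        Jplus[k - 1, k] = np.sqrt((j - m[k]) * (j + m[k] + 1))
    Jminus = Jplus.T                      # matrix elements are real, so transpose
    return Jz, Jplus, Jminus
```

One can check from these that Jx² + Jy² + Jz² comes out to j(j + 1) times the identity, and that for j = 1/2 the matrices reduce to the familiar 2×2 spin matrices.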
All right, another way of looking at this is to construct the actual matrix elements of the angular momentum operators in the standard angular momentum basis. The significance of the standard angular momentum basis is that, first of all, it exists for virtually any physical system, because you can apply rotations to anything, no matter what it is, and so you can construct this basis; and it's particularly convenient for studying properties of rotations and related issues. In other words, it exists as a basis for essentially any physical system. Whether it's more or less useful depends on the system; it's particularly useful in atomic, molecular, and nuclear physics, where we typically have isolated systems whose energy is independent of the orientation. Not always, of course; it depends on whether the system is interacting with external fields. But even if it is, this is still a useful basis to consider. All right, so to go back to what I was saying a moment ago, let's now look at the matrix elements of these angular momentum operators. To do this, what we do is take a bra, which I'll call ⟨γ′, j′, m′|, and sandwich it on the left of these operators to create matrix elements. If we do, we get this. For example, for J²: ⟨γ′, j′, m′| J² |γ, j, m⟩, an arbitrary matrix element of J² with respect to the standard angular momentum basis. This is, first of all, a Kronecker delta δ(γ′, γ), times a Kronecker delta δ(j′, j), because the primes are on the left and the unprimed indices on the right; and then what's left over is j(j + 1) times a Kronecker delta in the magnetic quantum numbers, δ(m′, m).
Likewise, if I do this for Jz, that is ⟨γ′, j′, m′| Jz |γ, j, m⟩, it's the same: δ(γ′, γ) δ(j′, j), and then we get m times δ(m′, m). These are diagonal in the m quantum number because we're letting the operators act on their own eigenstates. If, finally, we take the raising and lowering operators and sandwich them between the two states in the same way, we can write the result as δ(γ′, γ) δ(j′, j), times the famous square root √((j ∓ m)(j ± m + 1)), times δ(m′, m ± 1). Those are the main matrix elements that occur here. As you see, there's a common factor of δ(γ′, γ) δ(j′, j) in all of these cases; this is again a way of saying that these operators don't change the values of either γ or j. The matrix elements are zero unless the γ's and j's are equal on the two sides. All right. Now, this is the general structure of the matrix elements of angular momentum and rotation operators. To be general about this, let me make the following notation: let X be any function of the angular momentum operators, X = X(J). Just to give you a list of examples, this would include the components of the angular momentum, Jx, Jy, and Jz; I called them J1, J2, and J3 in the last lecture, but they're the same thing. Also the raising and lowering operators J±, also J², which is a function of J. But in addition, let's not forget the rotation operators, e^(−iθ n̂·J), which is another important operator that's a function of J.
Those e^(−iθ n̂·J) are the rotation operators. Did I say this earlier? I can't remember: I'm setting ħ = 1 in lecture today; it saves writing, and it's easy to restore the ħ's. Anyway, let X be any one of these, any function of J; those are the main examples. If we look at the matrix elements of X in the standard angular momentum basis, ⟨γ′, j′, m′| X |γ, j, m⟩, the general structure is this: it's diagonal in γ, a δ(γ′, γ), and diagonal in j, a δ(j′, j), and what's left over is a function of j, m, and m′ multiplying those deltas. It's this final function that contains the essence of the matrix elements of the operator in question. Now, as far as this final function is concerned, allow me to write it in the following way: ⟨j m′| X |j m⟩, with X in the middle. So the two j-values are the same, the m and m′ are allowed to be different, and there's no γ appearing at all. This is just notation, if you like, for that function; but of course I've written it as if it were a matrix element, and the question is, can we interpret this quantity as a matrix element? There are a couple of different points of view on this, and I'll try to explain them. In the first place, just writing |j m⟩ does not specify a vector of the standard angular momentum basis, because I've dropped the γ. The reason for doing that is that, in the first place, the right-hand side is diagonal in γ, but moreover the answer that comes out doesn't depend on the value of γ. So if you like, |j m⟩ here is an abbreviation for |γ, j, m⟩ in which I drop the γ because the answer doesn't depend on it anyway. That's one point of view on this.
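To illustrate the claim that these reduced matrix elements don't depend on γ, here is a toy construction of my own (not from the lecture): a Hilbert space built from two γ-sectors, each carrying j = 1/2. Any function of J comes out block-diagonal, with the same universal 2×2 block in each sector:

```python
import numpy as np
from scipy.linalg import block_diag, expm

# Two gamma-sectors, each a copy of j = 1/2 (a toy model, not the lecture's
# example): every function of J is block-diagonal with identical blocks.
jz = np.diag([0.5, -0.5])
jplus = np.array([[0.0, 1.0], [0.0, 0.0]])
jy = (jplus - jplus.T) / (2 * 1j)

Jz = block_diag(jz, jz)       # gamma = 1 block, then gamma = 2 block
Jy = block_diag(jy, jy)
U = expm(-1j * 0.3 * Jy)      # a rotation operator, itself a function of J
```

The rotation operator U has no matrix elements connecting the two sectors, and its two diagonal blocks are identical: the universal matrix that the γ-free notation refers to.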
Another thing to say is that, since the answer is diagonal in j, I might as well make the two j-values on the two sides equal, as I've done here, since there's no possibility of them being different. So this is what I mean: this contains the nontrivial part of the matrix element that remains. These matrices are even sometimes diagonal in m, but not always; in general, functions of the angular momentum operators are not diagonal in m, and this is certainly true for the rotation operators, as we'll see in a moment. All right. Now, there's another point of view on this as well, which has to do with the concept of invariant subspaces. Let me remind you that if you have an operator acting on your Hilbert space, then a subspace is invariant if the operator takes any vector in that subspace and maps it into another vector in the same subspace. Well, since these angular momentum operators don't change the values of γ and j, it means that the subspace in which γ and j are fixed is an invariant subspace. Let's look at that subspace. Let me call it E(γ, j); this is a subspace of our entire Hilbert space. To write it out in words, it is the subspace with fixed values of γ and j in our original Hilbert space. Now, it's easy to write down a basis in this subspace: it's just the collection of vectors of the standard angular momentum basis, |γ, j, m⟩, in which γ and j are fixed and m is allowed to take on all its possible values, which range from −j to +j. If I put brackets around this to indicate the set of vectors, that's the basis of this subspace. And you can see how many basis vectors there are: there are 2j + 1 of them, and so the dimensionality of E(γ, j) is 2j + 1. Notice that's independent of γ. All right.
So another point of view on this final notation is that these are the matrix elements of the operator X inside one of these invariant subspaces E(γ, j), in which we've just suppressed the γ index. By the way, there's yet another point of view on this as well: these matrix elements could be the matrix elements of the angular momentum operators on any of these E(γ, j)'s, for any system whatsoever, not necessarily the one you started with. It could be, for example, a spin system where you don't even need the index γ. And the reason for that is that these matrix elements depend only on the angular momentum commutation relations, and not otherwise on the physics of the system. Ultimately this all traces back to the rotation operators, which describe the geometry of Euclidean space, and that's why it's independent of the particular physics, of the physical interpretation of the angular momentum, whether orbital or spin or any other kind. These are, in fact, universal matrices that apply to all problems involving angular momentum. Now, as I mentioned, this space E(γ, j) is invariant under the action of any rotation operator, or indeed any function of J, which includes the rotation operators themselves. It's an invariant subspace. These invariant subspaces are particularly important in angular momentum theory. Roughly speaking, the reason is that if you take your system and rotate it, you're changing its orientation, but you're not changing the value of any of the observables that are rotationally invariant. For example, in an isolated system the energy is independent of the orientation, so you don't change the energy eigenvalue if you rotate it.
What you will do is mix up, that's to say take linear combinations of, the basis vectors in this subspace, transforming them into linear combinations of themselves to create a new vector inside the same subspace. So that's why this is an invariant subspace. This subspace is something else as well: it's also what's called irreducible, and so a longer name for it is an invariant, irreducible subspace. That's the name for it. If you ever study rotations and group theory in more detail, you'll certainly hear a lot about this terminology of irreducible subspaces. For our purposes, I'll just use it as language to describe these spaces; I'll call them irreducible subspaces. When you hear that word, what you should think of is a subspace spanned by the 2j + 1 states that can be created by applying the angular momentum operators to a given vector; these are also the subspaces that contain the vectors obtained by taking some given vector and applying all possible rotations to it. You see, the original Hilbert space decomposes into a direct sum of these irreducible subspaces for the different values of γ and j, and they're orthogonal to one another. That's the effect of rotations. So the result is that if you want to make an abbreviated presentation of the matrix elements of these operators, it's best to do it just in this reduced form, ⟨j m′| X |j m⟩, and to get the full form of the matrix elements you put back the Kronecker deltas. Now allow me to actually present some of these matrices representing these operators for different examples. The simplest case is j = 0. In this case the basis vectors, which I'll just write as |j m⟩ with the γ suppressed, have only one possible value for j and for m, namely 0 and 0: j = 0, and m has only one value.
And the dimension of the space, 2j + 1, is equal to 1: a one-dimensional space. In this case, for example, if I take Jz and let it act on |0, 0⟩, it just brings out m = 0, so the answer is 0. If I let J± act on |0, 0⟩, of course that raises or lowers the m-value, but the only m-value possible is 0, so they annihilate it and you get 0 again; and this implies that Jx and Jy acting on |0, 0⟩ also give 0. All three components of the angular momentum act on the vector |0, 0⟩ giving 0 as the answer. So if we sandwich with the bra on the other side, which in this case has to be ⟨0, 0| (there's only one of them), what we get is 1×1 matrices, and what we have to say then is that Jx, Jy, and Jz are each given by the 1×1 matrix containing 0. It's pretty simple: for a system that has total angular momentum 0, the angular momentum matrices are all zeros.

Now let's take the case j = 1/2, the next value of j up. In this case there are two vectors |j, m⟩: there's |1/2, 1/2⟩ and there's |1/2, −1/2⟩. First of all, the dimension of the space, 2j + 1, is equal to 2, so we're dealing with a case we've dealt with before, spin one-half systems, and the matrices that represent the operators are going to be 2×2 matrices. The case of Jz is particularly simple, because Jz is diagonal in the magnetic quantum number and just brings out the m-value, which is either +1/2 or −1/2 on the diagonal. So right away you can write Jz = (1/2) [1, 0; 0, −1]. As for the matrix for J+, here's what we do to find it. Take the stretched state, |1/2, 1/2⟩; if we apply J+ to that, we get 0, because you can't stretch it any more. If you take the anti-stretched state, |1/2, −1/2⟩, apply J+ to it, and work out the famous square root in that case, you'll find that it's 1, so J+ raises it up to the stretched state |1/2, 1/2⟩. The result is J+ = [0, 1; 0, 0]. As you see, it's not symmetric; it's not a Hermitian matrix. Once we've got J+, then J− is easy because it's just the transpose. I mean, it's really the Hermitian conjugate, but since J+ is real, J− is just the transpose. From that it's easy to get Jx, which is (J+ + J−)/2 = (1/2) [0, 1; 1, 0], and Jy, which is (J+ − J−)/2i = (1/2) [0, −i; i, 0]. The upshot is that the vector J, in matrix form, is (1/2)σ. This is the same identification between angular momentum and Pauli matrices that we made earlier in talking about spin one-half systems. So those are examples, for j = 0 and j = 1/2, of the matrices for the angular momentum operators.

If you go on to the case j = 1, then there are three states |j, m⟩: there's the stretched state |1, 1⟩, the middle state |1, 0⟩, and the anti-stretched state |1, −1⟩. The dimension of the space, 2j + 1, is equal to 3, so the matrices are 3×3 matrices. You see, each time you increase the angular momentum by half a unit, the matrices go up a dimension: one-dimensional, two-dimensional, now three-dimensional. Jz, as always, is easy, because it's just the magnetic quantum numbers on the diagonal, 1, 0, −1, with zeros everywhere else. As for J+, what you find is that if you raise the anti-stretched state, you get √2 times the middle state (that's the last column); if you raise the middle state, you get √2 times the stretched state (that's the center column); and the first column is 0, because if you try to raise the stretched state you can't stretch it any more, so you just get 0. Altogether, with rows and columns ordered m = 1, 0, −1, that's J+ = [0, √2, 0; 0, 0, √2; 0, 0, 0] for a spin-1 or angular-momentum-1 system. J− is the transpose of that, and from those you can get Jx and Jy by taking linear combinations. So that's an example of the matrices representing angular momentum for various values of j.

Now what about the rotation operators up there? Those are functions of J too, so we're particularly interested in the matrix elements ⟨j m| U(n̂, θ) |j m′⟩ of a rotation operator; I'll put the prime on the right-hand side now. I made the two j-values the same on both sides because if they weren't the same the answer would be 0, but I allow the m and m′ to be different. This is going to be a function of j, m, and m′, obviously, and it will also be a function of the axis and angle of rotation; it's really a matrix. There's a standard notation for this matrix, which I'll show you now. It's given by the symbol capital D with a j superscript and m m′ subscripts, and we label the rotation by axis and angle: D^j_{m m′}(n̂, θ) = ⟨j m| U(n̂, θ) |j m′⟩. The difference between D and U is that U is the operator and D is its matrix in the standard angular momentum basis. In some cases I'm sloppy and don't distinguish the two, but here I'm making a distinction between the matrix and the operator; they refer to the same axis and angle. This is just the definition of what are called the D matrices, sometimes called the Wigner rotation matrices in the literature. The D, by the way, comes from the German Drehung, which means rotation, and it's now standard notation for rotation matrices.

If the rotation is about the z-axis, the D matrix is particularly simple. So let me turn it around and consider D^j_{m m′}(ẑ, θ), a D matrix for a rotation about the z-axis. By definition this is ⟨j m| and |j m′⟩ sandwiched around the rotation operator about the z-direction by angle θ, which is the same thing as e^(−iθ Jz). The reason this is simple is that Jz is diagonal in the standard angular momentum basis: Jz acting on the ket on the right-hand side just brings out m′, and what's left over is a Kronecker delta in m and m′. So the answer is that D^j_{m m′}(ẑ, θ) = e^(−iθm) δ(m, m′). There you are: that's the D matrix for rotations about the z-axis, particularly simple. This comes about, of course, because we chose to diagonalize Jz in creating the standard angular momentum basis. For example, in the case of a spin one-half particle this turns into e^(−iθ/2) on the upper diagonal, e^(+iθ/2) on the lower diagonal, and zeros off the diagonal. So for j = 1/2, a spin one-half particle for example, that's the D matrix.

I hope you recall that when we were talking about spin one-half rotations, we encountered the fact that if you rotate an electron by 360 degrees it doesn't return to itself, but picks up a phase of −1; and that's because when θ = 2π, each of these factors on the diagonal becomes −1. This is just the case of a rotation about the z-axis, but it's actually true for a rotation about any axis: you get −1 if the angle is 2π. So one thing you can see now is that the same thing applies for any half-integer value of the angular momentum, because here are the general components of the D matrix for rotations about the z-axis, with the m up in the exponent. If j is a half-integer, such as 1/2, 3/2, 5/2, etc., then the m-values, the magnetic quantum numbers, are half-integers also; they go, for example, from −5/2 up to +5/2. So if you put θ = 2π in here, each of these diagonal matrix elements becomes −1, and the result is that the D matrix for a half-integer angular momentum at an angle of 2π is −1 times the identity. It's the same −1 phase factor that we saw for spin one-half particles. On the other hand, if the angular momentum is an integer, 0, 1, 2, 3, and so on, then the m that occurs here is an integer, so you just get e^(−iθ) to integer powers on the diagonals, and if you set θ = 2π, all these diagonal elements become +1. So for systems with integer angular momentum, rotating by 2π returns the system to its original state; in other words, you get what's called a faithful representation of the classical rotations, the same rule as classical rotations. It's only for half-integer angular momentum that you get these double-valued representations of rotations.

All right, anyway, this is just the definition of the D matrices, and I've only worked them out in the simple case where the axis is ẑ. There's another slight variation on this notation when one is interested in the Euler-angle parameterization of the rotations rather than the axis-angle parameterization. Let's take our basis states ⟨j m| and |j m′⟩ and sandwich them around a rotation operator written in Euler-angle form. This is again a definition: we'll call it a D matrix again, with the j superscript and m m′ subscripts, but with the Euler angles α, β, γ instead of n̂ and θ: D^j_{m m′}(α, β, γ) = ⟨j m| U(α, β, γ) |j m′⟩. This is a very slight modification of the notation. It turns out that some simplification can be made in the case of D matrices in Euler-angle form, and that's because, I'll remind you, this U(α, β, γ) is the product of a rotation about the z-axis by angle α, times a rotation about the y-axis by angle β, times a rotation about the z-axis by angle γ. This is the z-y-z convention for Euler angles, which we discussed earlier. Yes? Is it z-x-z? No, it should be z-y-z; take a look at the notes on the Euler angles and you'll see why. I think I mentioned that classical mechanics books usually use the z-x-z convention, but Wigner is responsible for the z-y-z convention in quantum mechanics, and I'll tell you in a minute why it's more convenient.

So this rotation operator factors into a product of three rotation operators, and we can insert resolutions of the identity between the pairs of rotation operators. If we do, and let's call m1 and m2 the indices summed over in the resolutions of the identity, we get the product of three matrices: ⟨j m| Uz(α) |j m1⟩, times ⟨j m1| Uy(β) |j m2⟩, times ⟨j m2| Uz(γ) |j m′⟩, summed over m1 and m2. However, as we've just seen, rotations about the z-axis are diagonal, so the first factor becomes e^(−iαm) δ(m, m1), and the final factor becomes e^(−iγm′) δ(m2, m′). The result is that the sums can be done, because you've got diagonal matrices on the right and on the left, and the only nontrivial part is the y-rotation that appears in the middle. People make a special notation for this y-rotation, the nontrivial part: it's another D matrix, but written with a lowercase d, because it depends on only one angle, not three. It's written as lowercase d of the middle Euler angle: d^j_{m m′}(β) = ⟨j m| e^(−iβ Jy) |j m′⟩. That's the definition of the lowercase d matrix. Anyway, the result is that the capital D matrix written in Euler-angle form is D^j_{m m′}(α, β, γ) = e^(−iαm) d^j_{m m′}(β) e^(−iγm′). So, as I say, the α and γ Euler angles are trivial, and it's only the middle one, the β Euler angle, that takes some work. That's because the matrices for Jy are not diagonal, so to compute the exponential you've got to do some work. In any case, the result is that if you look at tables of rotation matrices, they never tabulate the capital D matrices, just the lowercase d matrices, and they're considerably simpler because they depend on only the one parameter β.

There are formulas for them; maybe I'll just mention a couple. If we take the case j = 0, then in d^0_{m m′}(β) the m and m′ can only be 0, so this is going to be a 1×1 matrix. It's the exponential of that Jy matrix up there, which is 0, and the exponential of 0 is 1. So this matrix is equal to 1 for j = 0. In fact, more generally, if I take D^0_{m m′} for any rotation whatsoever, the answer is just 1: any of the rotation matrices of a system of 0 angular momentum are 1×1 matrices containing 1, because they're exponentials of 0. And what that means is that a system with 0 angular momentum is rotationally invariant: if you apply this matrix to a vector, a one-dimensional vector, this 1×1 matrix of 1 does nothing to it. j = 0 systems are rotationally invariant. That's an important thing to keep in mind.
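The Euler-angle factorization above can be sketched numerically. The helper name `big_D` is my own, and the index convention D^j_{m m′} = ⟨j m| U |j m′⟩ with m running from +j down to −j follows the lecture; the lowercase d is computed by brute-force exponentiation rather than a closed formula:

```python
import numpy as np
from scipy.linalg import expm

def big_D(j, alpha, beta, gamma):
    """D^j_{m m'}(alpha, beta, gamma): the alpha and gamma z-rotations are
    diagonal phases; the work is in d^j(beta) = exp(-i beta Jy)."""
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)                       # m = j, j-1, ..., -j
    Jp = np.zeros((dim, dim))
    for k in range(1, dim):
        Jp[k - 1, k] = np.sqrt((j - m[k]) * (j + m[k] + 1))
    Jy = (Jp - Jp.T) / (2 * 1j)
    d = expm(-1j * beta * Jy).real               # lowercase d: purely real
    # rows scaled by exp(-i alpha m), columns by exp(-i gamma m')
    return np.exp(-1j * alpha * m)[:, None] * d * np.exp(-1j * gamma * m)[None, :]
```

For j = 1/2 and α = γ = 0 this reproduces the 2×2 y-rotation matrix, and at β = 2π it gives −1 times the identity, the half-integer sign discussed above.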
If you talk about wave functions, these are called S-waves; that means 0 angular momentum. But it's true in general, for spin as well: if the total angular momentum is 0, the whole thing is rotationally invariant. So that's the case of j = 0. Let's go on to the next case, which is j = 1/2. For j = 1/2, we already worked out, in the last lecture I think it was, or two ago, the rotation matrices for spin one-half systems. In particular, if you rotate about the y-axis, here's what you get: d^(1/2)_{m m′}(β), which is the same thing as e^(−i(β/2) σy), and if you work it out, it turns into [cos(β/2), −sin(β/2); sin(β/2), cos(β/2)]. It's a nontrivial rotation matrix. Now I can tell you why Wigner decided to use the z-y-z convention for Euler angles instead of z-x-z. The reason is that, in the conventions we're using here, the matrix elements of the raising and lowering operators are real; that's why we have the famous square root with no phase in it. Granting that, you can see that the matrix elements of Jx are going to be real too, because Jx is (J+ + J−)/2. But that means the matrix elements of Jy are going to be purely imaginary. You know this already from the standpoint of the Pauli matrices, but it's true for any value of the angular momentum, not just j = 1/2. So when you exponentiate these matrices to get the rotations, for example a rotation about the y-axis, e^(−iβ Jy), the matrix for Jy is purely imaginary, and multiplying it by −iβ makes the exponent purely real. The result is that these rotation matrices about the y-axis are purely real, as you see in this example right here for spin one-half.
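One can check the reality claim directly by exponentiating the 3×3 Jy for j = 1. The closed form quoted below is the standard d^1(β), with rows and columns ordered m = 1, 0, −1; treat it as something to verify against the exponential, not as a result quoted in lecture:

```python
import numpy as np
from scipy.linalg import expm

# j = 1: build Jy from J+ (the sqrt(2) matrix) and exponentiate.
beta = 0.4
Jp = np.array([[0.0, np.sqrt(2), 0.0],
               [0.0, 0.0, np.sqrt(2)],
               [0.0, 0.0, 0.0]])
Jy = (Jp - Jp.T) / (2 * 1j)          # purely imaginary entries
d1 = expm(-1j * beta * Jy)           # should come out purely real

# Standard closed form for d^1(beta), rows/columns ordered m = 1, 0, -1.
c, s = np.cos(beta), np.sin(beta)
d1_closed = np.array([[(1 + c) / 2, -s / np.sqrt(2), (1 - c) / 2],
                      [ s / np.sqrt(2), c, -s / np.sqrt(2)],
                      [(1 - c) / 2, s / np.sqrt(2), (1 + c) / 2]])
```

The imaginary part of the exponential vanishes to machine precision, and the real part matches the closed form, which is also orthogonal, as a real unitary matrix must be.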
Whereas if you used the x convention, they'd be complex matrices. It just simplifies things, and it means that these lowercase d matrices, which are the essence of the rotation matrices, turn out to be purely real with this z-y-z convention. That's the reason it's done. Okay. Okay, I think that's all I want to say about rotation matrices. I did just the 0 and 1/2 cases; I'll leave it as an exercise for you to look at the j = 1 case. There, as I say, the essence is to do the y-rotation. So what you need to do is work out the matrix elements of Jy, and you do that by subtracting the matrices for J+ and J− and dividing by 2i. Then you need to exponentiate the series, and you can do this by the means you've used already for exponentiating matrices: you express higher powers in terms of lower powers, collect the terms together, and get a trigonometric series. It's easy to do. Anyway, the calculation is summarized in the notes, and it's also done in Sakurai's book; it's similar to other such calculations that we've done in the past. When you're done, you'll end up with the 3×3 matrix that is used for rotating systems of angular momentum equal to 1. There's one final topic on this. There are some relations I want to mention, and that is the story of adjoint formulas. I'll be fairly brief about it because we've seen two different versions of adjoint formulas already. When we did the special case of spin one-half, the adjoint formula was this: if you take a rotation operator in axis-angle form and use it to conjugate the vector of Pauli matrices, sandwiching σ between the operator and its inverse (they're unitary), what you get is the classical rotation matrix, with an inverse, multiplying the vector σ. So that's how it worked out for the case of spin one-half. Now, of course, for spin one-half, σ is proportional to the angular momentum: J = (1/2)σ, with ħ = 1.
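The spin one-half adjoint formula can be checked numerically. One common way to state it is U(R)† Jᵢ U(R) = Σⱼ Rᵢⱼ Jⱼ (conjugating the other way around brings in R inverse instead), and the sketch below, my own example for a rotation about the z-axis, verifies that version:

```python
import numpy as np
from scipy.linalg import expm

# Check U(R)^dagger J_i U(R) = sum_j R_ij J_j for j = 1/2, rotation about z.
theta = 0.7
Jx = np.array([[0, 1], [1, 0]]) / 2
Jy = np.array([[0, -1j], [1j, 0]]) / 2
Jz = np.diag([0.5, -0.5])
J = [Jx, Jy, Jz]

U = expm(-1j * theta * Jz)                       # quantum rotation operator
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])                        # classical rotation matrix
```

Running the same check with J replaced by the 3×3 matrices for j = 1 works equally well, which is the content of the generalization to arbitrary angular momentum.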
Well then, in view of this formula, it won't be surprising to learn that for general angular momentum we have an adjoint formula that looks like this: if you take the rotation operator and use it to conjugate the angular momentum itself, what you get is the inverse of the classical rotation multiplying the angular momentum vector. I don't think I'll spend class time proving this; the proof is in the notes and it's straightforward. Actually, there are several different proofs you can construct, but it's such an obvious generalization of the spin one-half case that I'll just leave it at that. As I said, these are very useful formulas, because they allow you to take a rotation operator and pull it through an angular momentum, which is something you end up doing quite a lot in practice. Now, in the remaining ten minutes, I want to turn to a new topic, which we'll take up further next hour: the subject of spins in magnetic fields. I'll just give you an introduction to the real problems there. There's a whole host of issues which are typically glossed over and ignored in introductory courses, and I'd like to pay some attention to them because they're connected with a fundamental understanding of what's involved with spins and magnetic fields. I'd like to begin by reviewing some classical electromagnetism regarding multipoles and magnetic moments and things of that sort. So for the time being, I'm going to do classical mechanics and classical E and M; hopefully this is just a reminder for you. Suppose we have some localized region of space that contains a charge density and a current density, both functions of position. Yes? Is this also in your notes? There are some notes on this, but I have to say I'm not very happy with them, and I won't have time to rewrite them; I apologize for that. They really need to be reworked.
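The general adjoint formula above—conjugating J by a rotation operator gives the inverse classical rotation acting on the vector of operators—can be verified numerically in the spin one-half case. A sketch under the lecture's conventions (J = sigma/2, hbar = 1), using an arbitrary rotation about z:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [sx, sy, sz]

theta = 0.6                                   # arbitrary rotation angle
U = expm(-1j * theta / 2 * sz)                # spin rotation about z
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])                      # classical rotation about z
Rinv = R.T                                     # inverse of an orthogonal matrix

# Adjoint formula: U sigma_i U^dagger = sum_j (R^-1)_{ij} sigma_j
for i in range(3):
    lhs = U @ sigma[i] @ U.conj().T
    rhs = sum(Rinv[i, j] * sigma[j] for j in range(3))
    assert np.allclose(lhs, rhs)
```

The same check works for any axis and for the j = 1 matrices, which is one way to convince yourself of the general statement without writing out a proof.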
But anyway, I'll try to give the most logical presentation of the material in lecture; I think it's a little better straightened out than in the notes. But there is a set of notes on this coming up. So anyway, the classical electromagnetism of multipole expansions goes something like this. We have a localized charge and current distribution, and here J is the electric current density now, not angular momentum anymore. This, of course, produces electric and magnetic fields, and those fields can be represented as a sum of multipole fields. The first term is the monopole field, the second term is the dipole field, the next term is the quadrupole field, then the octupole, and so on up. These fields are characterized by the manner in which they fall off with distance: the monopole field falls off as one over r squared, the dipole as one over r cubed, the quadrupole as one over r to the fourth, and so on. The result is that if you're at a large distance from the localized charge and current distribution, the first nonvanishing term in the series is the one that dominates. For example, if the total charge is nonzero, then at large distances the electric field is dominated by the monopole field, which is basically a Coulomb field: it looks as if all the charge were concentrated at a point. Now, in the case of magnetic fields, however, there is no monopole term, because as far as we know magnetic monopoles don't exist. So the leading term for the magnetic field is the dipole term, and that's the term that dominates at large distances. Of course, if you come in to short distances near the charge and current distribution, the higher multipole moments become important and you can't neglect them.
So it's really at large distances that the leading term dominates. All right. Now, I mainly want to concentrate on the magnetic dipole term, because I want to talk about spins and magnetic moments. The dipole part of the magnetic field of a localized current distribution is quantified by the magnetic moment. The magnetic moment mu is defined this way, in Gaussian units: it's one over 2c times an integral over space of the position vector crossed into the current density, x cross J d³x. This is just the definition. For example, if you take a current distribution consisting of a current going around a wire loop, then you'll find the magnetic moment mu is a vector perpendicular to the loop. You can do calculations like that. Now, suppose we take this current loop, or a magnetic moment in general, and put it in an external magnetic field B. Then just by calculating the v cross B force on the charges—in other words, the J cross B force density—you can calculate, for example, the total force on the current loop, and other things too. Calculating the force, here's what you find: the force is the gradient of mu dotted into B. The magnetic moment mu is not a function of space; it's just a property of the current distribution, although it may be a function of time, for example if the loop is moving around. So when you take the gradient here, the gradient acts only on B, which itself may be a function of space. In particular, if B is uniform, then this force is zero: there is no net force on a magnetic dipole in a uniform magnetic field. That's why you need nonuniform fields in the Stern-Gerlach apparatus in order to deflect the particles.
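The force law F = grad(mu · B) can be illustrated with a small finite-difference check. The field profile here is a hypothetical example of my own (a z-directed field with a constant gradient g, not anything from the lecture), chosen so the uniform case and the Stern-Gerlach-like nonuniform case are both visible:

```python
import numpy as np

# Dipole moment along z (arbitrary illustrative value)
mu = np.array([0.0, 0.0, 1.5])

def Bfield(z, g, b0=1.0):
    """Hypothetical field B = (0, 0, b0 + g*z): uniform when g = 0."""
    return np.array([0.0, 0.0, b0 + g * z])

def force_z(z, g, h=1e-6):
    # z-component of grad(mu . B), computed by a central difference;
    # the gradient acts only on B, since mu is not a function of space.
    return (mu @ Bfield(z + h, g) - mu @ Bfield(z - h, g)) / (2 * h)

assert abs(force_z(0.3, g=0.0)) < 1e-9             # uniform field: no net force
assert np.isclose(force_z(0.3, g=2.0), 1.5 * 2.0)  # F_z = mu_z * dB_z/dz
```

The second assertion is the Stern-Gerlach point in miniature: the deflecting force is proportional to the field gradient, not to the field itself.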
Actually, this result is obtained on the assumption that B is varying slowly, or that the dipole has a small spatial extent, because you're really expanding the magnetic field about, let's say, the center of the dipole. If you were more careful about this, there would be higher-order terms; this is the leading term, and for a small dipole it's what you get. There's also a torque on the dipole which you can calculate, and that is mu cross B. The torque, of course—since this is classical mechanics—is the time derivative of the angular momentum of the dipole. Now, the torque brings up the question of what the angular momentum of the dipole is. This is still classical mechanics. You see, the magnetic moment, as is clear from the formula, depends on the current distribution—in other words, on the charges and how they're moving—whereas the angular momentum depends on the distribution of mass and how it's moving. Those don't have to be the same. You could have neutral particles moving around in a circle and creating angular momentum, but they wouldn't create any magnetic moment at all. So in general L and mu are different vectors in the classical problem, sticking off in different directions. Nevertheless, there are some simple examples in which the angular momentum and the magnetic moment are proportional to one another. The simplest is the case of a charged particle in a circular orbit—this is still classical. Let's say the orbit has radius r, the particle moves with constant speed v in the circular orbit, and the charge is q. Well, in this case the angular momentum L is, of course, m times r cross v, which is a vector perpendicular to the orbit, like this.
And if you use the definition above to compute the magnetic moment, you find that it's in the same direction, and that mu is equal to q over 2mc—the charge over twice the mass times the speed of light—times the angular momentum L. So mu and L, the magnetic moment and the angular momentum, are in fact proportional in a simple example like this. All right. Now the question arises whether mu and L are proportional to one another in quantum mechanics as well. The answer is somewhat complicated—I may have gone into too much detail on this in the notes—so let me simplify it here by saying that in many important cases in quantum mechanics they are proportional there too, although not always. In fact, to be a little more general about it, you can say that in quantum mechanics mu and the angular momentum—which now may include spin angular momentum—are proportional if the applied magnetic field is small enough; I'll quantify that later on. In any case, let me give you some important examples. If we're talking about orbital angular momentum in quantum mechanics—for example, the orbital motion of an electron in an atom—then it turns out that the classical formula that comes from this simple current loop carries over directly into quantum mechanics with no modification: we get mu equals q over 2mc times L for the orbital angular momentum. One thing I meant to point out from the board above, which I failed to do, is that this is a vectorial relation between these two vectors—let me write it down again. They may be parallel or antiparallel to one another, because there's a coefficient here which involves the charge. In particular, if the charge is negative, then the magnetic moment and the angular momentum point in opposite directions.
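The circular-orbit example above can be made concrete numerically. This sketch (my own illustration, with arbitrary values for q, m, r, v in Gaussian units) time-averages x cross v around the orbit: L = m⟨x × v⟩ and, for a point charge, mu = (q/2c)⟨x × v⟩, so the two are proportional with ratio q/2mc:

```python
import numpy as np

# Arbitrary illustrative values: charge, mass, speed of light, radius, speed
q, m, c, r, v = 2.0, 3.0, 1.0, 1.5, 0.4

phis = np.linspace(0, 2 * np.pi, 400, endpoint=False)
xs = np.stack([r * np.cos(phis), r * np.sin(phis), 0 * phis], axis=1)   # positions
vs = np.stack([-v * np.sin(phis), v * np.cos(phis), 0 * phis], axis=1)  # velocities

xcv = np.cross(xs, vs).mean(axis=0)   # orbit-averaged x cross v
L = m * xcv                            # angular momentum, perpendicular to the orbit
mu = (q / (2 * c)) * xcv               # magnetic moment from (1/2c) * integral of x cross J

assert np.allclose(mu, q / (2 * m * c) * L)   # mu = (q / 2mc) L
assert np.isclose(L[2], m * r * v)            # magnitude m r v along the orbit axis
```

With a negative q the proportionality constant flips sign, which is the antiparallel case the lecture mentions.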
You see, these are physically two different things. The magnetic moment can be measured experimentally by its interaction with a magnetic field. The angular momentum is a mechanical quantity; you can measure it, for example, by using conservation of angular momentum in an interacting closed system. They're very different things. And for a negatively charged particle they point in opposite directions, whereas for a positively charged particle they point in the same direction. This carries over into quantum mechanics as well; as far as the orbital angular momentum is concerned, you have the same relationship. Now, particles—whether so-called elementary particles or so-called composite particles—also have a property of angular momentum which is generally called spin. In the case of a particle with spin, there is still a proportionality between the magnetic moment and the angular momentum—particles generally have magnetic moments—except that notationally we write the angular momentum as S, just to indicate spin. We split off a factor of q over 2mc, which gets the dimensions right, but then in general it's necessary to put in a fudge factor, or scaling factor, called g. (I'm sorry my q looks like a g; I'll try to make them as different as possible.) This g-factor is a dimensionless number which needs to be introduced to quantify the proportionality between mu and the spin S. These are the g-factors of the various particles one can consider. Let me just make one more remark before I let you go: in the case of a composite particle, the spin which occurs here is regarded as the total angular momentum of the particle, all of its internal structure added together.
For example, the proton is made out of three quarks, and what we call the spin of the proton, which is one-half, is really the sum of the spins of the three quarks plus their orbital angular momentum, all added together. For another example, the deuteron is a composite particle made out of a proton and a neutron, and what we call the spin of the deuteron is the sum of the spins plus the orbital angular momentum of the proton and the neutron, regarded as a two-body system orbiting around each other. The same applies to more complicated nuclei, which have larger numbers of protons and neutrons. Yes? Why don't we see excited states of the quarks? I mean, why couldn't you go to an excited state with a different orbital configuration? Well, that would change the energy, so you'd be talking about an excited state of the proton, and in a sense you do see that: these are the baryons—the lambda particles, the sigma particles, and so on—which in some sense can be regarded as excited states. So in a sense you do see that. All right, that's all for today. If you want to pursue that, ask me again next time, because that's a good question.