I'd like to start today by elaborating on this concept of irreducible invariant subspaces that I've talked about quite a bit, to put it in a general context, and to show you how it's related to subspaces of operators and ultimately to the Wigner-Eckart theorem, which is one of the main goals of these recent lectures. To begin with, let's be rather general about the space in question and call the vector space V, just to have a general notation for it. In fact, we've talked about two different types of vector spaces so far upon which rotations act, and this is part of the summary here from last lecture. One is the space of kets: we found that when we rotate, the basis vectors go into linear combinations of those basis vectors, with D-matrices as the coefficients. And also, in ordinary three-dimensional space, if you use the spherical basis, rotating the basis vectors gives the same kind of linear combination, showing that the spherical basis is actually a standard angular momentum basis in ordinary three-dimensional space. Part of my point is that there are two very different kinds of vector spaces on which the same mathematics applies. The main emphasis today is going to be on yet a third kind of vector space, namely the vector space of operators. In any case, for now let me just be rather general about the vector space V, and let's suppose we have a unitary representation of rotations, U(R), acting on this vector space V such that the operators reproduce the multiplication law of rotations. Now, this vector space V may have an invariant subspace; let me explain what that means. Let S be a subspace of our space V. We can picture it like this if we're in three dimensions: here's a subspace, say a plane, and here's the origin.
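As a quick numerical illustration of that remark about the spherical basis (my own sketch, not from the lecture, assuming the common convention ê(±1) = ∓(x̂ ± iŷ)/√2, ê(0) = ẑ; phase conventions vary by text): a rotation about z multiplies ê(q) by the phase e^(−iqφ), which is exactly the behavior of a standard angular momentum basis with j = 1.

```python
import numpy as np

# Spherical basis vectors (a common convention; phases vary by text)
e = {
    +1: -np.array([1, 1j, 0]) / np.sqrt(2),
     0:  np.array([0, 0, 1], dtype=complex),
    -1:  np.array([1, -1j, 0]) / np.sqrt(2),
}

phi = 0.7
c, s = np.cos(phi), np.sin(phi)
Rz = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])  # active rotation about z

# Each spherical basis vector is an eigenvector of Rz with eigenvalue e^{-i q phi},
# matching the j = 1 D-matrix for a z-rotation
for q, eq in e.items():
    assert np.allclose(Rz @ eq, np.exp(-1j * q * phi) * eq)

# They are also orthonormal: <e_q, e_q'> = delta_{qq'} (vdot conjugates its first argument)
for q in e:
    for qp in e:
        assert np.isclose(np.vdot(e[q], e[qp]), 1.0 if q == qp else 0.0)
```

The Cartesian basis, by contrast, mixes under this same rotation, which is why the spherical basis is the natural one for angular momentum bookkeeping.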
Let me also draw the perpendicular space, which I'll call S-perp, the orthogonal complement of the given subspace. Now, we say a subspace is invariant under the action of rotations if the rotation operators take any vector in the subspace and map it into another vector in the same subspace, like this; the rotated vector is confined to the subspace itself. We can show that if you have an invariant subspace, then the orthogonal complement is also invariant, provided the operators are unitary. The result is that if you take an orthonormal basis in the subspace plus another orthonormal basis in the orthogonal complement, then in that combined basis the matrices which represent our rotation operators have a block-diagonal form. The dimensions of the two spaces, the subspace and its orthogonal complement, don't have to be the same, so I'll draw the block-diagonal form with a small block and a large block. The point is that the off-diagonal blocks are all zero if you have an invariant subspace as I've described, while the diagonal blocks contain matrices which in general are not zero, so I'll just draw them as x's here. Now, the subspace S may itself possess an invariant subspace, a smaller space which is invariant. If it does, then we say the subspace S is reducible. If it does not possess a smaller invariant subspace, then we call it irreducible. Let's suppose that S is reducible. What that means is that we can do another change of basis inside here, with some new set of basis vectors, splitting S into smaller orthogonal subspaces; say the upper block corresponds to the smaller invariant subspace and the rest corresponds to its orthogonal complement within S. Then this block becomes block-diagonalized in turn, with zero off-diagonal pieces and generally non-zero matrices along the diagonal, like this.
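A minimal concrete illustration of all this (my own sketch, not from the lecture): for rotations about the z-axis acting on ordinary three-dimensional space, the xy-plane is an invariant subspace, its orthogonal complement the z-axis is invariant too, and in the adapted basis the rotation matrix is visibly block diagonal, a 2x2 block plus a 1x1 block.

```python
import numpy as np

theta = 1.1
c, s = np.cos(theta), np.sin(theta)
# Rotation about z in the basis (x, y, z): a 2x2 block plus a 1x1 block
R = np.array([[c, -s, 0],
              [s,  c, 0],
              [0,  0, 1]])

# The subspace S = span{x, y} is invariant: vectors in the plane stay in the plane
v = np.array([0.3, -2.0, 0.0])
assert np.isclose((R @ v)[2], 0.0)

# The orthogonal complement S-perp = span{z} is invariant as well
w = np.array([0.0, 0.0, 5.0])
assert np.allclose(R @ w, w)

# Off-diagonal blocks vanish in the adapted basis
assert np.allclose(R[:2, 2], 0) and np.allclose(R[2, :2], 0)

# The block structure survives multiplication, as a representation requires
RR = R @ R
assert np.allclose(RR[:2, 2], 0) and np.allclose(RR[2, :2], 0)
```

Here the 2x2 block is reducible further over the complex numbers (it diagonalizes in the spherical basis), which previews the next step of the decomposition.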
So every time you take a reducible space and break it into a smaller invariant subspace, that corresponds to a further block diagonalization of the rotation operators. This is important in quantum mechanics because when we diagonalize matrices, it's a lot easier to deal with small matrices than it is with big ones; this is one of the advantages of this whole subject. In any case, what we can do is proceed, breaking reducible subspaces into smaller invariant subspaces, until all the subspaces we have are irreducible. At that point, no further decomposition can be achieved, and the matrix finally consists of a bunch of blocks along the diagonal, where each block corresponds to an invariant, irreducible subspace. Thus, the invariant irreducible subspaces in effect form the building blocks out of which the representation U(R) is made. All representations of the rotations on any vector space whatsoever can ultimately be put into the form of block-diagonal matrices, where the diagonal blocks are of a standard form corresponding to the angular momentum associated with the given block. Now, in the case of ket spaces, we spoke of the standard angular momentum basis, which came from diagonalizing J squared and J_z and using the raising and lowering operators. So, if our vector space V is a ket space, then I can say what the invariant irreducible subspaces are in terms of the standard angular momentum basis. These subspaces, which I'll call E(gamma, j), depend on the first two quantum numbers. In fact, they're just the span of the basis vectors |gamma j m>, in which m runs over its range from minus j to plus j. The dimension of these irreducible invariant subspaces is equal to 2j + 1, like this. And so that's what happens when you do the block diagonalization.
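To make the counting concrete (my own sketch, not from the lecture): for a rotation about z, the block for angular momentum j is just diag(e^(−imθ)) with m = j, ..., −j, a (2j+1)-by-(2j+1) matrix, and the full representation is the direct sum of such blocks.

```python
import numpy as np

def dj_block_z(j, theta):
    """D^j matrix for a rotation by theta about z: diagonal, entries e^{-i m theta}."""
    ms = np.arange(j, -j - 1, -1)          # m = j, j-1, ..., -j
    return np.diag(np.exp(-1j * ms * theta))

def block_diag(blocks):
    """Assemble a block-diagonal matrix from a list of square blocks."""
    n = sum(b.shape[0] for b in blocks)
    out = np.zeros((n, n), dtype=complex)
    i = 0
    for b in blocks:
        k = b.shape[0]
        out[i:i + k, i:i + k] = b
        i += k
    return out

theta = 0.4
js = [0, 1, 2]
U = block_diag([dj_block_z(j, theta) for j in js])

# Each block has dimension 2j + 1, so the whole space has dimension sum(2j + 1)
assert U.shape[0] == sum(2 * j + 1 for j in js)  # 1 + 3 + 5 = 9

# The representation is unitary
assert np.allclose(U @ U.conj().T, np.eye(U.shape[0]))
```

For a general rotation axis the blocks are full (2j+1)-dimensional Wigner D-matrices rather than diagonal, but the block-diagonal structure is exactly the same.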
Now, the irreducible invariant subspaces each have a dimension of 2j + 1, with a different j value for each block. All right. So this is how it works out with ket spaces, and this is the basic definition of irreducible invariant subspaces. Now, this is the beginning of a much more elaborate theory, which you can take other courses on, in group theory; I don't intend to go into it very deeply. But I do want to show you how this applies in the case in which the vector space is a space of operators, so let's turn our attention to that. The idea here is that we have a ket space, as usual, which represents the wave functions of our quantum mechanical system, and we also have operators on the ket space. Examples include the Hamiltonian, or L squared or L_z for the hydrogen atom, the position coordinates x, y, z of the electron, the momentum, and so on; there's a whole list of physical operators in use. Now, the space of operators forms a vector space: if you form linear combinations of operators, you get a new operator. And moreover, it's a vector space on which rotations act, because if A is an operator, we've seen that when you rotate it, it goes into the operator U(R) A U(R)-dagger. This is the definition of the rotated operator that we were talking about last time. And so the space of operators can be broken up into invariant irreducible subspaces, and those subspaces of operators are called irreducible tensor operators. They're one of the main objects I want to talk about today. So let me give you some examples of invariant subspaces of operators. We talked last time about scalar operators; call one K. A scalar operator is one that's invariant under all rotations, so that K = U(R) K U(R)-dagger, and this holds for all possible rotations: it's invariant under rotations.
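The map A -> U A U-dagger is linear in A, which is what makes the space of operators a vector space on which rotations act; here is a quick check (my own sketch, not from the lecture) with a z-rotation on a spin-1 space.

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 0.9
# U = exp(-i theta Jz) for spin 1, in the basis m = 1, 0, -1 (hbar = 1)
U = np.diag(np.exp(-1j * theta * np.array([1, 0, -1])))

def rotate(A):
    """The rotated operator: A -> U A U(dagger)."""
    return U @ A @ U.conj().T

A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
alpha, beta = 2.0 - 1j, 0.5 + 3j

# Linearity: rotating a linear combination gives the same linear combination
# of the rotated operators, so rotations act linearly on operator space
assert np.allclose(rotate(alpha * A + beta * B),
                   alpha * rotate(A) + beta * rotate(B))
```

Because the action is linear, all the invariant-subspace machinery from before applies verbatim, just with "vector" read as "operator".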
We have also shown that this is equivalent to the statement that K commutes with all three components of the angular momentum; these are the finite and infinitesimal versions of the same statement. Well, let's consider the space of operators. If you think of operators as a big vector space, it is an infinite-dimensional vector space. Let's consider the subspace of operators in which we multiply some specific K by a complex number a. If you think about this geometrically in the space of operators, it's like a line: multiplying by a scales K up and down, where a is a complex number. So what happens to this space of operators under rotations? Well, since the defining equation is linear in K, if I multiply both sides by a, what you see is that any operator in this space is mapped into another operator in the same space by the conjugation; in fact, each such operator goes into itself. And the result is that this one-dimensional space is invariant under rotations. A scalar operator thus spans a one-dimensional invariant subspace of operators. K can be thought of as a basis operator: when I multiply by a, I get the general operator in that space. And this space is automatically irreducible, because, being of dimension one, it can't be split into any smaller spaces. Now, let me give you another example of an invariant subspace of operators. This one comes from the vector operators, of which we saw several examples in the last lecture. I'll remind you of the definition of a vector operator; let me do this in index form, I think it'll be easier. If we rotate the i-th component of the operator, with U(R) and U(R)-dagger sandwiched around V_i, we get the sum on j of V_j times R_ji, the rotation matrix.
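For instance (my own sketch, not from the lecture), K = J squared on a spin-1 space is a scalar operator: it commutes with every component of J, the infinitesimal version, and conjugating it by a rotation leaves it unchanged, the finite version.

```python
import numpy as np

s = 1 / np.sqrt(2)
# Spin-1 angular momentum matrices in the basis m = 1, 0, -1 (hbar = 1)
Jx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)

K = Jx @ Jx + Jy @ Jy + Jz @ Jz   # J^2 = j(j+1) = 2 times the identity
assert np.allclose(K, 2 * np.eye(3))

# Infinitesimal version: K commutes with all three components of J
for J in (Jx, Jy, Jz):
    assert np.allclose(J @ K - K @ J, 0)

# Finite version: U K U(dagger) = K for a rotation about z
theta = 0.6
U = np.diag(np.exp(-1j * theta * np.array([1, 0, -1])))
assert np.allclose(U @ K @ U.conj().T, K)
```

Any multiple aK passes the same two checks, which is the one-dimensional invariant subspace just described.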
So think of the three operators which are the components of the vector operator, V_x, V_y, and V_z, such as the components of momentum, for example, as three basis operators that span a three-dimensional space of operators. The general operator in this space is a linear combination of these basis operators, of the form a_x V_x + a_y V_y + a_z V_z, where a_x, a_y, and a_z are just ordinary numbers, the expansion coefficients of the linear combination. We would normally write this as a dotted into V, in a notation with a as a vector of numbers and V as a vector of operators. This is the general form of an operator that lies in the three-dimensional space of operators spanned by these basis operators. Now, by the definition of a vector operator, you can see that if you rotate any one of these basis operators, you get a linear combination of the same three basis operators. And the result is that this entire subspace of operators is invariant under rotations. In fact, it is not only invariant, it's irreducible: it has no smaller invariant subspace. I won't prove that, because it won't be important for us, but this is our second example of an irreducible subspace of operators, in addition to the scalars; this one is a three-dimensional space. All right. What about a tensor operator?
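The angular momentum itself is a vector operator, so it can serve as a numerical check (my own sketch, not from the lecture) of the transformation law U(R) V_i U(R)-dagger = sum over j of V_j R_ji, here for a rotation about z acting on a spin-1 space.

```python
import numpy as np

s = 1 / np.sqrt(2)
# Spin-1 matrices, basis m = 1, 0, -1 (hbar = 1); J is itself a vector operator
Jx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)
V = [Jx, Jy, Jz]

theta = 0.8
c, sn = np.cos(theta), np.sin(theta)
R = np.array([[c, -sn, 0], [sn, c, 0], [0, 0, 1]])       # active rotation about z
U = np.diag(np.exp(-1j * theta * np.array([1, 0, -1])))  # exp(-i theta Jz)

# U V_i U(dagger) = sum_j V_j R_{ji}: each rotated component is a linear
# combination of the same three basis operators, so the subspace is invariant
for i in range(3):
    lhs = U @ V[i] @ U.conj().T
    rhs = sum(V[j] * R[j, i] for j in range(3))
    assert np.allclose(lhs, rhs)
```

The same check with a generic rotation axis would need the full matrix exponential, but the statement being verified is identical.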
We talked about a tensor operator T_ij last time; it's a tensor of operators, which means there are nine operators, and the transformation law is that if you conjugate by a rotation, which is how we rotate a component, you get the sum over k and l of the components T_kl times R_ki times R_lj, like this. So what you see is that we have a nine-dimensional space of operators spanned by these components, and in fact it is invariant under rotations, because you get a linear combination of the same operators when you rotate any component. [Student question: is there a reason you switched from U-dagger to U-inverse?] No, there is no reason for it; it's a unitary operator, so they're the same. All right, so this is indeed an invariant subspace. Now the question is: is it reducible or irreducible? Does it have any smaller invariant subspaces? Unlike the case of the scalar and vector operators, the space of second-rank tensor operators actually is reducible. Here's an example of why. If we take the trace of the tensor, the sum over i of T_ii, this of course is another operator, just the sum of the diagonal elements, and we conjugate it by rotations like this. On the right-hand side we get the sum over k and l of T_kl, and now all I need to do is copy the transformation law, except I set i equal to j, so I get R_ki times R_li, and the sum over i of these two R matrices, because of the orthogonality of rotation matrices, gives us a Kronecker delta in k and l. So the result is that it becomes just the sum over k of T_kk; as you see, it's just the trace of the tensor operator all over again. So the trace of the tensor operator is invariant under rotations; therefore it forms a one-dimensional invariant subspace, and the trace is a scalar, by the definition of a scalar operator. Thus the nine-dimensional space of second-rank tensor operators breaks up into a one-dimensional space, which is the trace scalar, plus eight more dimensions.
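Numerically (my own sketch, not from the lecture), the component transformation T_ij -> sum over k, l of T_kl R_ki R_lj is just T -> R-transpose T R as a matrix equation, and the trace is indeed untouched; the antisymmetric and symmetric-traceless pieces, discussed next, also each transform within themselves.

```python
import numpy as np

rng = np.random.default_rng(1)

theta = 1.2
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])  # any rotation matrix works here

T = rng.standard_normal((3, 3))          # a generic second-rank tensor
Tp = R.T @ T @ R                         # T'_ij = sum_kl T_kl R_ki R_lj

# The trace is invariant under rotations (orthogonality of R)
assert np.isclose(np.trace(Tp), np.trace(T))

def antisym(M):
    return (M - M.T) / 2

def sym_traceless(M):
    return (M + M.T) / 2 - np.trace(M) / 3 * np.eye(3)

# Each piece of the decomposition transforms into the same kind of piece,
# i.e. each piece spans an invariant subspace:
assert np.allclose(antisym(Tp), R.T @ antisym(T) @ R)
assert np.allclose(sym_traceless(Tp), R.T @ sym_traceless(T) @ R)

# And the pieces add back up to the full tensor (1 + 3 + 5 = 9 dimensions)
assert np.allclose(T, np.trace(T) / 3 * np.eye(3) + antisym(T) + sym_traceless(T))
```

The antisymmetry check works because the transpose of R-transpose T R is R-transpose T-transpose R, so symmetry type is preserved by the transformation.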
Now, is this eight-dimensional space reducible or irreducible? The answer is that it is also reducible, for this reason. We take the antisymmetric part of the tensor, (T_ij minus T_ji), divided by two for convenience; that's really not important. The antisymmetric part of a three-by-three matrix can always be associated with a three-vector. It works like this: the diagonal entries are zero, and if I call the vector A, the matrix is (0, A_z, -A_y; -A_z, 0, A_x; A_y, -A_x, 0). In any case, it's possible to associate an antisymmetric matrix with a vector, and if you do this, what you'll find, by conjugating by U (I'll skip the calculation), is that an antisymmetric tensor transforms into another antisymmetric tensor, and therefore the three independent components get mixed up among themselves. So we have a three-dimensional invariant subspace: this A part gives a three-dimensional invariant subspace, which in fact is irreducible. What's left over is five dimensions. As for those final five dimensions, they're spanned by operators which are symmetric, because we took the antisymmetric part out already, but which can't have any trace either, because we took that out already in the scalar. So a basis for the remaining five-dimensional space is the symmetric traceless part, which I'll write this way: (T_ij + T_ji)/2, minus something to make it traceless, and that something is one third of delta_ij times the trace of the tensor. Because if I take the trace of this, the trace of delta_ij is 3, which cancels the one third, and the trace of T cancels the trace of T to give 0. So this is symmetric and traceless, and there are five independent components of it, which form a basis of this final space. So what we've achieved is a decomposition of the nine-dimensional space of operators into invariant, irreducible subspaces of dimensions 1 plus 3 plus 5. This dimensionality count, not by accident, is the same as what you get in a Clebsch-Gordan calculation in which you combine two angular momenta of one together: according to the rules, what comes out is j = 0, 1, and 2, the allowed values of the total angular momentum, so 3 times 3 gives, for two j = 1's, 1 plus 3 plus 5, which equals 9. There are two ways of decomposing 9, and this is not an accident either: this decomposition of the tensor is closely related to the Clebsch-Gordan decomposition of the product space of two j = 1 angular momenta. Vector operators, in a sense, are j = 1 objects, and these pieces of the tensor operator are j = 0, 1, and 2. The most important example, the symmetric traceless tensor written right here, which is like a j = 2 operator, is the quadrupole moment tensor; this is important in nuclear physics, and I'll come back to quadrupole moment tensors later in the course. So, to briefly summarize: these are some examples of spaces of operators that are invariant under rotations, and how they can be broken into smaller invariant subspaces, and so on. [Student question: why does the symmetric matrix have to be traceless?]
Because we already took out the trace, and we're interested in a basis of the remaining five-dimensional space of operators; and it's symmetric because we already took out the antisymmetric part, so this is what's left over. Actually, the quadrupole moment tensor just turns out automatically to be symmetric and traceless; that's not an accident, and if you go through the calculation it comes out that way. Now, having already discussed vector and tensor operators, what I'd like to do is make a definition of a new class of operators, which are called irreducible tensor operators. Recall that a vector operator really means a vector of operators: it's a collection of three operators that transform a certain way under rotations. An irreducible tensor operator is similar. First of all, it's not just one operator; it's a collection of 2k + 1 operators. You might say an irreducible tensor operator of order k, if you want to put it a little more formally, is a collection of 2k + 1 operators that we denote T^k_q: T stands for tensor, k is an index, and the index q ranges from minus k to plus k, which gives you the 2k + 1 operators. You can see from the notation that k is like an angular momentum, like a j value, and q is like a magnetic quantum number; it behaves that way in terms of the labeling of the operators. The collection must have specified transformation properties under rotations, and the transformation property is expressed on the board that I've now covered up, but I'll open it up: the board on which I summarized the transformation properties of basis kets under rotations, and also of the spherical basis of three-dimensional space under rotations. The definition is that if I take T^k_q, one of the components of the irreducible tensor operator, and I rotate it by conjugating with the rotations (that's how you rotate operators), what I get is the sum over q-prime of T^k_q-prime, for the same value of k but the different values of q-prime, with coefficients which are D-matrices, D^k_{q-prime, q}(R). By comparing these formulas, you can appreciate that a collection of operators is called an irreducible tensor operator when it forms a standard angular momentum basis of operators: it transforms under rotations exactly as a standard angular momentum basis transforms in the case of the other vector spaces upon which rotations act. So that's what it is: it's just the rule for transforming basis vectors, except that we use the rule for rotating operators here instead of the rule for rotating vectors, so it takes two U's instead of one. Now let me give you some examples. In the first place, let's take the case k = 0. If k = 0, then q has only one value, namely 0, since it runs from 0 to 0, so the irreducible tensor operator is a single operator such that U(R) T^0_0 U(R)-dagger, by the definition, is the sum over q-prime of T^0_{q-prime} times D^0_{q-prime, 0}(R). However, q-prime can only take on the value 0; that's the only term there is in the sum, so this becomes T^0_0 times D^0_00(R). Well, the rotation matrices for the case j = 0 are just one-by-one matrices whose single entry is one. That's because j = 0 states are invariant under rotations, so when you rotate, they're just multiplied by one, which doesn't do anything at all; those are s-waves, if you're talking about wave functions. So the result is that this is equal to T^0_0, and what you see is that a k = 0 irreducible tensor operator is the same thing as a scalar: it's something that's invariant under rotations. This applies to any scalar, such as the Hamiltonian for an isolated system. Let's go on to the next example, which is k = 1; in this case q takes the values 0, plus 1, and minus 1. It turns out that a k = 1 irreducible tensor operator can be created
by taking any vector operator, as we've defined it up to this point in Cartesian components, and going over to the spherical basis; what we obtain is a T^1_q. So here's what we'll do. I'll remind you of the spherical basis vectors e-hat_q: if we dot e-hat_q into the vector operator V, this is what we call V_q, and I'll also write it in another notation, T^1_q, to indicate that it gives us the components of an irreducible tensor operator with k = 1. It's a vector operator, but expressed in terms of the spherical basis. Now, to show that this really is such an irreducible tensor operator, let's calculate U(R) T^1_q U(R)-dagger. It's the same thing as U(R) times e-hat_q dotted into our vector operator V, times U(R)-dagger. The e-hat_q is just a vector of numbers, so you can take it outside, and what's left over is the rotated vector operator. Just use the definition of how a vector operator transforms, component by component: the rotation matrix can be shifted over onto e-hat_q, so this becomes R applied to e-hat_q, dotted into the vector operator V. Now, we know how the spherical basis vectors transform under rotations—I can lift my note here—they themselves form a standard angular momentum basis, so they transform with the D-matrices for j = 1. Continuing down here, R acting on e-hat_q becomes the sum over q-prime of e-hat_{q-prime} times D^1_{q-prime, q} of our rotation R. That's the first factor, and this whole thing is then dotted into our vector operator V; the dot acts through the e-hat_{q-prime}, not on V itself. So the result is the sum over q-prime of V_{q-prime} times D^1_{q-prime, q}(R), and if I relabel the V_{q-prime} as T^1_{q-prime}, you see that the definition of a k = 1 irreducible tensor operator is reproduced. So the simple rule to get a k = 1 irreducible tensor operator, which is a collection of three operators, since k = 1 gives three q values, is: take any vector operator and compute its components with respect to the spherical basis. I'll remind you that when we were looking at matrix elements for radiative transitions in hydrogen in the last lecture, we had these matrix elements with n, l, m with primes on the left, the position operator in the middle, and n, l, m on the right. What we did was take the Cartesian components of the position vector r and replace them by the spherical components r_q, and the result was precisely to replace the operator in the middle by the standard components of an irreducible tensor operator. So now what you've got is a standard angular momentum basis on the two sides—you're taking matrix elements with respect to that—and an irreducible tensor operator in the middle. Matrix elements of this form are very common throughout quantum mechanics, and the Wigner-Eckart theorem, which I may go into today, concerns matrix elements of precisely this type; this was an example of such a thing. All right, that was the case of the k = 1 irreducible tensor operator. Without going into details, let me say that if you take the quadrupole moment operator, which was expressed—I actually didn't write it down—in a Cartesian basis, and go over to a spherical basis, then what you'll find is that it's a k = 2 irreducible tensor operator, with its five components labeled by q running over 0, plus or minus 1, and plus or minus 2. The quadrupole moment operator is important for the spectra of nuclei, and I'll say more about that later in the course. Now, I'm afraid I've covered up the actual definition, and I didn't finish it here, because the definition is down on this other board. Before I get into that, let me remind you that in the case of a scalar operator we actually had two definitions: one was that it is invariant under rotations, and we found that this is
equivalent to saying that the scalar operator commutes with the angular momentum; those are the finite and infinitesimal versions of the same statement. Likewise, when we were talking about a vector operator in Cartesian components, we found that the definition of the vector operator in terms of finite rotations, which I have not repeated here, is equivalent to what is in effect an infinitesimal version, namely commutation relations with the angular momentum—those relations there. I'll remind you that these two different versions are two different ways of defining scalar and vector operators. Now, the same thing can be said for irreducible tensor operators: there is a finite version in terms of finite rotations, which is just this, and it just means that they form a standard angular momentum basis in an operator space. However, there is also an infinitesimal version, which comes from allowing the rotation here to be infinitesimal; the infinitesimal version turns, as you might expect, into commutation relations. So let me summarize the commutation relations over here, and then I'll prove them for you. The point now is to make an alternative, equivalent statement of the definition of an irreducible tensor operator in terms of commutation relations, rather than transformation properties under finite rotations. Here is what they are. First of all, if I take J_z and form the commutator with a component of an irreducible tensor operator, with indices k and q, what it does is just bring out the factor of q: the commutator of J_z with T^k_q is h-bar q T^k_q. As you see, q looks like a magnetic quantum number; remember, conjugating with J_z is like letting J_z act on a ket with that magnetic quantum number—it just brings the quantum number out. If you want to be dimensionally right, put in the h-bar. Likewise, if you have J-plus-or-minus and you form the commutator with T^k_q, you get h-bar times the usual square-root factor, except expressed in terms of k's and q's instead of j's and m's: it becomes the square root of (k minus-or-plus q) times (k plus-or-minus q plus 1), multiplying T^k with the q that's been raised or lowered. So forming the commutator with J-plus-or-minus raises or lowers the component of an irreducible tensor operator. And then there is a third identity, which is this: the sum over i of the double commutator, [J_i, [J_i, T^k_q]], where the J_i are the Cartesian components, is equal to h-bar squared times k times (k + 1) times T^k_q. This is the statement that, in a sense, an irreducible tensor operator is an eigenoperator of J squared with eigenvalue k(k + 1); this gives meaning to the angular momentum of the operator. These are the three infinitesimal, commutator versions that are equivalent to the single transformation law under rotations. Now allow me to actually prove these. I'll do that by starting with the definition, which I'll uncover. Just to preview what I'm going to do: I'll replace these two U's by infinitesimal rotations, and likewise this D-matrix here. Remember, a D-matrix is a matrix element of U: written this way, it's the bracket of k, q-prime with U(R) acting on k, q—that's just what a D-matrix is. I don't have to say what space these kets |k q> lie in; it can be any space that has the angular momentum quantum numbers k and q, because the D-matrices are independent of the nature of the space; it's just convenient to write the D-matrix as a matrix element in this way. So I'm going to replace this U also by its infinitesimal version. Let me do that. The definition of an irreducible tensor operator, when the rotation is infinitesimal, looks like this (I'll set h-bar equal to one to save writing): here's the infinitesimal rotation in axis-angle form, 1 minus i theta n-hat dot J, then T^k_q, then the inverse, or dagger, of the infinitesimal rotation, 1 plus i theta n-hat dot J. This should equal the sum over q-prime of T^k_{q-prime} times the infinitesimal D-matrix, which I'll write as the bracket of k, q-prime with (1 minus i theta n-hat dot J) acting on k, q. Now, if we expand this out, on the
left-hand side what we get is: T^k_q from the 1 here and the 1 there, and then, at first order, minus i theta times the commutator of n-hat dot J with T^k_q, because we get n-hat dot J times T^k_q from one side and the same thing the other way around, with opposite sign, from the other. On the right-hand side, through first order, the 1 just gives us a delta of q and q-prime, which collapses the sum to T^k_q, and then the second term is minus i theta times the sum over q-prime of T^k_{q-prime} times the matrix element of n-hat dot J between k, q-prime and k, q. The zeroth-order terms T^k_q cancel between the two sides, and so do the factors of minus i theta. Now, the n-hat that appears on both sides is an arbitrary unit vector—you can choose it to be x-hat, y-hat, or z-hat—and thus you can just drop the n-hat and get a vector relation. Let me write that out: the commutator of the vector J with T^k_q is equal to the sum over q-prime of T^k_{q-prime} times the matrix element of k, q-prime with the vector J in the middle and k, q on the right, like this. Now, having done that, I can dot both sides into any constant vector a—a dotted into J here and a dotted into J there. Really, all I've done is replace the n-hat by an a, which is okay since this works for any n-hat. So this is the basic commutation relation. Now let's take some special cases of it. First of all, take the case a equal to z-hat (a doesn't have to be a unit vector, but it can be): then we get the commutator of J_z with T^k_q equal to the sum over q-prime of T^k_{q-prime} times the matrix element of J_z. J_z acting on |k q> brings out the q, and the rest of it is a Kronecker delta in q and q-prime, so the result is q times T^k_q. That's the first commutation relation, the one with J_z. Now, if I choose the vector a to be equal to, let's say, x-hat plus i y-hat, then a dot J becomes a J-plus, and in exactly the same way we get the second commutator, the one with J-plus-or-minus. And finally, let a be one of the Cartesian unit vectors, which I was calling c_i in yesterday's lecture—that just means x-hat, y-hat, or z-hat—and you get one of the commutators with J_i. You then need to repeat the construction, because the third relation is a double commutator: you insert a resolution of the identity between the two commutators, and out comes the eigenvalue k(k + 1), giving the final relationship; I don't want to go through the details in class. So these equations are equations involving operators which are completely analogous to what happens when you apply J_z, J-plus-or-minus, and J squared to vectors in a ket space: you bring out exactly the same kinds of factors on the right-hand side. And that's just another way of saying that the T^k_q form a standard angular momentum basis, except in this case of operators instead of vectors. Now let me turn to the Wigner-Eckart theorem, one of the main results. The Wigner-Eckart theorem will get used quite a lot, of course, in the next semester; it's especially useful for understanding radiative transitions. What it does: the Wigner-Eckart theorem concerns matrix elements of irreducible tensor operators, a T^k_q sandwiched between basis vectors from the standard angular momentum basis on your ket space. In general language, we write gamma-prime, j-prime, m-prime, with primes, on the left, and unprimed gamma, j, m on the right. So it's a general matrix element of an irreducible tensor operator with respect to a standard angular momentum basis. An example that we saw last time, from the radiative transitions in hydrogen, was this one, where the r_q is the same thing as a T^1_q, a k = 1 irreducible tensor operator. And you may recall we had a large number of matrix elements to evaluate, because the magnetic quantum
In the example I mentioned last time, that gave 45 different combinations of m', q and m. The Wigner-Eckart theorem in particular addresses the dependence of this matrix element on the three magnetic quantum numbers. What it says is that this matrix element is proportional to — we'll leave space for the proportionality factor — the Clebsch-Gordan coefficient which contains these magnetic quantum numbers, namely ⟨j' m'|j k m q⟩ in the standard notation, times the proportionality factor. And what the proportionality factor depends on is everything else except the magnetic quantum numbers: it depends on γ' and γ, on j' and j, on the operator T and on k, but not on m', q or m. Here's the standard way of writing that proportionality factor. It's sometimes called a reduced matrix element, and it's written like an ordinary matrix element except that we omit the magnetic quantum numbers and put double bars, so the notation is ⟨γ' j'‖T^k‖γ j⟩. So the theorem reads

⟨γ' j' m'| T^k_q |γ j m⟩ = ⟨γ' j'‖T^k‖γ j⟩ ⟨j' m'|j k m q⟩,

and this is the Wigner-Eckart theorem. Yes, a question about the j and k inside this Clebsch-Gordan coefficient — what space do they refer to? Well, think of it this way: a Clebsch-Gordan coefficient is what you get when you combine two spaces with fixed angular momentum, in this case two spaces with angular momenta j and k. It could be any space in which you started with two angular momenta j and k and combined them together, and then m and q would be the magnetic quantum numbers. Yes, m and q are the magnetic quantum numbers, exactly. I'm not sure I remember what general notation I used for Clebsch-Gordan coefficients, but it's like ⟨J M|j1 j2 m1 m2⟩, where capital letters are sometimes used for the total angular momentum and j1, j2, m1, m2 stand for the two constituents; that would be a common way of writing a Clebsch-Gordan coefficient, and that's what this means. Okay, so notice here: if I label the angular momenta in the Clebsch-Gordan coefficient 1, 2 and 3 moving from left to right, then where they appear in the original matrix element is 1 on the left, then 3 for the k, and then 2 over here for the j — in other words, 3 and 2 are swapped. This is only a convention, and someone could define it the other way around so they'd be in the same order. I don't like this convention, because it makes the Wigner-Eckart theorem harder to remember; if you're worried about it, just look it up. I don't expect you to memorize it, although I do expect you to understand the structure of it. In any case, I think the reason people do it this way is that in practice the k's are often rather small while the j's can be large: for example, if you're doing dipole transitions, k is equal to 1, but j is the angular momentum of the state, which can be quite large, so you're adding a small k onto a larger j. Anyway, I'm just guessing that that's why it's done this way; the theorem would still be true if they were swapped, it would just be a different definition of the reduced matrix element. All right, so that's the Wigner-Eckart theorem. Now let me tell you what it's good for. It's really good for two things. Probably the most common use is for selection rules, because it tells you that the matrix element on the left-hand side vanishes whenever the Clebsch-Gordan coefficient vanishes, and we know the selection rules for Clebsch-Gordan coefficients. So this whole thing is equal to 0 unless those selection rules are satisfied: first of all, the angular momentum quantum number j' must be reachable by combining j with k.
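As an aside not in the lecture: these Clebsch-Gordan coefficients are available in exact symbolic form, for example in sympy, whose `clebsch_gordan(j1, j2, j3, m1, m2, m3)` denotes ⟨j1 m1 j2 m2 | j3 m3⟩. That makes both the ordering convention and the vanishing patterns easy to play with.

```python
from sympy import sqrt, Rational
from sympy.physics.wigner import clebsch_gordan

# <j' m' | j k m q> with j = 1, k = 1, j' = 2.
# sympy's argument order is (j1, j2, j3, m1, m2, m3) for <j1 m1 j2 m2 | j3 m3>.
c = clebsch_gordan(1, 1, 2, 0, 0, 0)
assert c == sqrt(Rational(2, 3))

# Triangle rule: j' = 3 is not reachable by combining j = 1 with k = 1.
assert clebsch_gordan(1, 1, 3, 0, 0, 0) == 0

# Magnetic selection rule: the coefficient vanishes unless m' = m + q.
assert clebsch_gordan(1, 1, 2, 1, 0, 0) == 0   # m' = 0 but m + q = 1
assert clebsch_gordan(1, 1, 2, 1, 0, 1) != 0   # m' = m + q = 1
```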
That is, j' must belong to the set of values that starts with the absolute value |j − k| and goes up in integer steps, |j − k|, |j − k| + 1, all the way up to the maximum, j + k. Secondly, there's a selection rule on the magnetic quantum numbers: the m' on the left has to be equal to the sum of the m's on the other side, so we also have to have m' = m + q. Now let's apply this to our example, ⟨n' l' m'| r_q |n l m⟩, where r_q is a T^1_q, for radiative transitions in hydrogen — hydrogen-like atoms or alkali atoms would be the same, in the electrostatic model. These selection rules tell us, first of all, that this matrix element is equal to 0 unless l' is equal to l − 1, l, or l + 1; those are the only three possibilities. And there's the other condition, that the magnetic quantum numbers must satisfy m' = m + q. Now, the physical meaning of this is the following. In the transition, we're starting out with a definite state of the atom, one with definite angular momentum, an eigenstate of L² and L_z. After the transition has taken place, we have the atom in the lower state, but we also have a photon which is traveling away. What these rules are telling us is that the angular momentum of the photon is 1 — the spin of the photon is 1: combining the spin 1 of the photon with the angular momentum of the final atomic state, we can only reach the angular momentum of the initial state if l' lies in this range. Likewise, as far as the magnetic quantum numbers are concerned, this is conservation of the z-component of angular momentum in the emission of the photon: it says that the J_z of the initial state must be equal to the J_z of the final atomic state plus the J_z of the photon — that's what the q is, the J_z of the photon. So these selection rules are telling you about angular momentum conservation in the emission and absorption of radiation, and lying behind this is the more fundamental rule that the angular momentum of the combined system, the atom plus the electromagnetic field, is conserved. Here we're thinking of an initial state with no photon, so there is no angular momentum in the electromagnetic field initially, only atomic or matter angular momentum; the final state is a mixture of the two, and it has to follow these rules. All right. Anyway, selection rules are probably the most common use of the Wigner-Eckart theorem, and it's very easy to use for that. Now here's another application. Suppose, as I described in Monday's lecture, we actually needed to compute these matrix elements for all possible combinations of m', q and m; a lot of those would be 0, but a lot of them are not 0 either. Well, the Wigner-Eckart theorem allows you to do this too, and it helps you cut down the computational work rather substantially. Here's what you do. You look for one set of magnetic quantum numbers which satisfies the selection rules, so that the Clebsch-Gordan coefficient for that matrix element is not 0. Then you calculate that one matrix element on the left-hand side; you just have to do it by whatever means are available. In the example we had last time, that would be one radial integral plus one angular integral, for some specific values of q, m and m', so you have to do two messy integrals. Once you've got that, you divide through by the Clebsch-Gordan coefficient of this matrix element, and that gives you the reduced matrix element. Once that is known, all the other nonzero matrix elements can be determined just by calculating other Clebsch-Gordan coefficients.
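Here is a sketch of that procedure in code — my own illustration, using matrix elements of Y_{1q} between spherical harmonics on the sphere rather than the hydrogen wavefunctions from last time. Sympy's `gaunt(l1, l2, l3, m1, m2, m3)` computes the integral of three (unconjugated) spherical harmonics, and the conjugated harmonic in the bra is handled via Y*_{lm} = (−1)^m Y_{l,−m}.

```python
from sympy import simplify
from sympy.physics.wigner import gaunt, clebsch_gordan

l, k, lp = 1, 1, 2  # matrix elements <l'=2, m'| Y_{1q} |l=1, m>

def matrix_element(mp, q, m):
    # integral of conj(Y_{l' m'}) * Y_{k q} * Y_{l m} over the sphere
    return (-1) ** mp * gaunt(lp, k, l, -mp, q, m)

def cg(mp, q, m):
    # Clebsch-Gordan coefficient <l' m' | l k m q>
    return clebsch_gordan(l, k, lp, m, q, mp)

# Step 1: compute ONE allowed matrix element ("the messy integral") and
# divide by its Clebsch-Gordan coefficient to get the reduced matrix element.
reduced = matrix_element(0, 0, 0) / cg(0, 0, 0)

# Step 2: every other matrix element now follows from a CG coefficient alone.
for (q, m) in [(0, 1), (1, 0), (1, 1), (-1, 1)]:
    mp = m + q  # selection rule m' = m + q
    assert simplify(matrix_element(mp, q, m) - reduced * cg(mp, q, m)) == 0
```

One explicit integral instead of one per (m', q, m) combination: that is the labor-saving content of the theorem.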
So it can be a tremendous labor-saving device in calculations like this. Actually, in this example we ended up using the 3-Ylm formula to get the final answer and to show that the result is proportional to the Clebsch-Gordan coefficient, and the 3-Ylm formula is itself an example of the Wigner-Eckart theorem. As a reminder of what it looks like: it's an integral over solid angle Ω of a Y*_{lm} — that's the conjugated one — times two more Y's, call them Y_{l1 m1} and Y_{l2 m2}. The formula itself is rather a mouthful, but there's a prefactor, and what's left over is a Clebsch-Gordan coefficient ⟨l m|l1 l2 m1 m2⟩:

∫ dΩ Y*_{lm} Y_{l1 m1} Y_{l2 m2} = √( (2l1+1)(2l2+1) / (4π(2l+1)) ) ⟨l 0|l1 l2 0 0⟩ ⟨l m|l1 l2 m1 m2⟩.

Well, the left-hand side can be regarded as a matrix element. Let's put the Y_{lm} on the left as the bra; the other two are just a product, so we could put them in either order, but let's put the Y_{l2 m2} in the middle as the operator, and then you've got the Y_{l1 m1} on the right. Then this can be regarded as the matrix element, between two basis vectors of a standard angular momentum basis for functions on the sphere, of an irreducible tensor operator. So the 3-Ylm formula is an example of the Wigner-Eckart theorem, and moreover it's one for which, by doing analytic calculations, we actually determined the reduced matrix element: you can get it just by looking at the 3-Ylm formula and picking it off. I won't repeat it, but it's in the notes; you can look it up. So those are the two primary applications that I think are important, and we'll see quite a lot of these applications.

Before you go, I'd like to say that, as I mentioned earlier, I'd like to have a make-up lecture — not next Tuesday, but the Tuesday after that. That will be the Tuesday of Thanksgiving week, which means we'll have three lectures in Thanksgiving week, on Monday, Tuesday and Wednesday. It spreads out the effort, because otherwise we'd have to have four lectures in one week; this way we've got three lectures all the way through. So unless I hear a lot of objections, that's the plan.
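Coming back to the 3-Ylm formula for a moment, it can also be checked symbolically. This is my own verification sketch using sympy's `gaunt` and `clebsch_gordan`, with the same convention Y*_{lm} = (−1)^m Y_{l,−m} for the conjugated harmonic.

```python
from sympy import sqrt, pi, simplify
from sympy.physics.wigner import gaunt, clebsch_gordan

def three_ylm(l, m, l1, m1, l2, m2):
    # integral of conj(Y_{l m}) * Y_{l1 m1} * Y_{l2 m2} over solid angle
    return (-1) ** m * gaunt(l, l1, l2, -m, m1, m2)

def rhs(l, m, l1, m1, l2, m2):
    # prefactor times the two Clebsch-Gordan coefficients of the formula
    pref = sqrt((2 * l1 + 1) * (2 * l2 + 1) / (4 * pi * (2 * l + 1)))
    return (pref * clebsch_gordan(l1, l2, l, 0, 0, 0)
                 * clebsch_gordan(l1, l2, l, m1, m2, m))

# Check the formula for a few combinations of quantum numbers.
for (l, m, l1, m1, l2, m2) in [(2, 0, 1, 0, 1, 0),
                               (2, 2, 1, 1, 1, 1),
                               (3, 1, 2, 1, 1, 0)]:
    assert simplify(three_ylm(l, m, l1, m1, l2, m2)
                    - rhs(l, m, l1, m1, l2, m2)) == 0
```

The ⟨l 0|l1 l2 0 0⟩ factor sits in the prefactor, so the reduced matrix element can indeed be read off directly, as stated above.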